[NFC][MLGO] Convert notes to proper RST note directives in MLGO.rst #146450

Merged
merged 1 commit into main from users/svkeerthy/07-01-doc_cleanup
Jul 1, 2025

Conversation

svkeerthy
Contributor

No description provided.


@svkeerthy svkeerthy changed the title Doc cleanup [NFC][MLGO] Convert notes to proper RST note directives in MLGO.rst Jul 1, 2025
@svkeerthy svkeerthy marked this pull request as ready for review July 1, 2025 01:23

svkeerthy commented Jul 1, 2025

Merge activity

  • Jul 1, 1:23 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Jul 1, 1:25 AM UTC: @svkeerthy merged this pull request with Graphite.

@llvmbot llvmbot added the mlgo label Jul 1, 2025

llvmbot commented Jul 1, 2025

@llvm/pr-subscribers-mlgo

Author: S. VenkataKeerthy (svkeerthy)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/146450.diff

1 file affected:

  • (modified) llvm/docs/MLGO.rst (+21-13)
diff --git a/llvm/docs/MLGO.rst b/llvm/docs/MLGO.rst
index 6f7467063552f..a33af82c287f2 100644
--- a/llvm/docs/MLGO.rst
+++ b/llvm/docs/MLGO.rst
@@ -15,11 +15,13 @@ Currently the following heuristics feature such integration:
 
 This document is an outline of the tooling and APIs facilitating MLGO.
 
-Note that tools for orchestrating ML training are not part of LLVM, as they are
-dependency-heavy - both on the ML infrastructure choice, as well as choices of
-distributed computing. For the training scenario, LLVM only contains facilities
-enabling it, such as corpus extraction, training data extraction, and evaluation
-of models during training.
+.. note::
+    
+  The tools for orchestrating ML training are not part of LLVM, as they are
+  dependency-heavy - both on the ML infrastructure choice, as well as choices of
+  distributed computing. For the training scenario, LLVM only contains facilities
+  enabling it, such as corpus extraction, training data extraction, and evaluation
+  of models during training.
 
 
 .. contents::
@@ -329,8 +331,10 @@ We currently feature 4 implementations:
   the neural network, together with its weights (essentially, loops performing
   matrix multiplications)
 
-NOTE: we are actively working on replacing this with an EmitC implementation
-requiring no out of tree build-time dependencies.
+.. note::
+    
+  we are actively working on replacing this with an EmitC implementation
+  requiring no out of tree build-time dependencies.
 
 - ``InteractiveModelRunner``. This is intended for training scenarios where the
   training algorithm drives compilation. This model runner has no special
@@ -531,9 +535,11 @@ implementation details.
 Building with ML support
 ========================
 
-**NOTE** For up to date information on custom builds, see the ``ml-*``
-`build bots <http://lab.llvm.org>`_. They are set up using 
-`like this <https://github.com/google/ml-compiler-opt/blob/main/buildbot/buildbot_init.sh>`_.
+.. note::
+  
+  For up to date information on custom builds, see the ``ml-*``
+  `build bots <http://lab.llvm.org>`_. They are set up using 
+  `like this <https://github.com/google/ml-compiler-opt/blob/main/buildbot/buildbot_init.sh>`_.
 
 Embed pre-trained models (aka "release" mode)
 ---------------------------------------------
@@ -567,9 +573,11 @@ You can also specify a URL for the path, and it is also possible to pre-compile
 the header and object and then just point to the precompiled artifacts. See for
 example ``LLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL``.
 
-**Note** that we are transitioning away from the AOT compiler shipping with the
-tensorflow package, and to a EmitC, in-tree solution, so these details will
-change soon.
+.. note::
+
+  We are transitioning away from the AOT compiler shipping with the
+  tensorflow package, and to a EmitC, in-tree solution, so these details will
+  change soon.
 
 Using TFLite (aka "development" mode)
 -------------------------------------

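The edit applied throughout the diff above is mechanical: an informal ``NOTE:`` prose paragraph becomes an RST ``.. note::`` directive whose body is indented relative to the directive marker. A minimal sketch of that transformation in Python (a hypothetical helper for illustration; the PR made these edits by hand):

```python
import re
import textwrap


def note_to_directive(paragraph: str) -> str:
    """Turn an informal 'NOTE:' / '**Note**' prose paragraph into an RST
    ``.. note::`` directive with a two-space indented body.

    Hypothetical helper, not part of the PR or of LLVM tooling.
    """
    # Drop the informal marker: "NOTE:", "**NOTE**", "**Note**", any case.
    body = re.sub(r"^\s*(?:\*\*)?note(?:\*\*)?:?\s*", "", paragraph,
                  flags=re.IGNORECASE)
    # RST requires the directive body to be indented past the ".." marker.
    return ".. note::\n\n" + textwrap.indent(body, "  ")


print(note_to_directive(
    "NOTE: we are actively working on replacing this with an EmitC implementation."
))
```

The blank line between the ``.. note::`` marker and the indented body matches the style used in the diff; RST also accepts the body starting on the same line as the directive.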
@svkeerthy svkeerthy merged commit a2dc64c into main Jul 1, 2025
12 checks passed
@svkeerthy svkeerthy deleted the users/svkeerthy/07-01-doc_cleanup branch July 1, 2025 01:25
rlavaee pushed a commit to rlavaee/llvm-project that referenced this pull request Jul 1, 2025